biologically plausible learning rule


GAIT-prop: A biologically plausible learning rule derived from backpropagation of error

Neural Information Processing Systems

Traditional backpropagation of error, though a highly successful algorithm for learning in artificial neural network models, includes features which are biologically implausible for learning in real neural circuits. An alternative called target propagation proposes to solve this implausibility by using a top-down model of neural activity to convert an error at the output of a neural network into layer-wise, plausible 'targets' for every unit. These targets can then be used to produce weight updates for network training. However, thus far, target propagation has been proposed heuristically, without demonstrable equivalence to backpropagation. Here, we derive an exact correspondence between backpropagation and a modified form of target propagation (GAIT-prop) in which the target is a small perturbation of the forward pass. Specifically, backpropagation and GAIT-prop give identical updates when synaptic weight matrices are orthogonal. In a series of simple computer vision experiments, we show near-identical performance between backpropagation and GAIT-prop with a soft orthogonality-inducing regularizer.
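The soft orthogonality-inducing regularizer mentioned in the abstract is commonly implemented as a Frobenius-norm penalty on the Gram matrix of the weights. The sketch below (a generic illustration, not the paper's exact implementation; the `strength` coefficient is an assumed hyper-parameter) shows the penalty and its gradient:

```python
import numpy as np

def soft_orthogonality_penalty(W, strength=1e-2):
    """Penalty strength * ||W^T W - I||_F^2, which is zero iff W has
    orthonormal columns."""
    d = W.shape[1]
    gram = W.T @ W
    return strength * np.sum((gram - np.eye(d)) ** 2)

def penalty_gradient(W, strength=1e-2):
    """Analytic gradient of the penalty w.r.t. W: 4 * strength * W (W^T W - I)."""
    d = W.shape[1]
    return 4.0 * strength * W @ (W.T @ W - np.eye(d))

# For an orthogonal matrix the penalty (and its gradient) vanish.
Q, _ = np.linalg.qr(np.random.randn(5, 5))
print(round(soft_orthogonality_penalty(Q), 10))  # ≈ 0.0
```

Adding this term to the task loss nudges the weight matrices toward the orthogonal regime in which, per the abstract, GAIT-prop updates coincide with backpropagation.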


Review for NeurIPS paper: GAIT-prop: A biologically plausible learning rule derived from backpropagation of error

Neural Information Processing Systems

I believe this paper makes a meaningful contribution to this line of work and have changed my score accordingly to support acceptance. I do have a few comments that I hope you will consider as you prepare a final version of this paper, mainly coming from a neuroscience perspective. While the method described in this paper advances the family of target prop-related models and may serve as a foundation for future work in bio-plausible learning models, I don't think it is appropriate to describe it as more biologically plausible than backpropagation. One of the commonly cited biologically implausible features of backpropagation (weight symmetry) is replaced here by an equally implausible mechanism (perfect inverse models). It is true that bio-plausible ways of approximating inverses may exist, but there are also proposals for bio-plausible ways of maintaining weight symmetry (e.g.


Review for NeurIPS paper: GAIT-prop: A biologically plausible learning rule derived from backpropagation of error

Neural Information Processing Systems

This paper presents a biologically plausible learning rule as an alternative to standard back-propagation. This is a heavily studied area in ML, with strong interest from both the ML and computational neuroscience communities. The reviewers agreed that this work presents an exciting and important contribution over the existing literature on this problem. There was extensive discussion between reviewers, with two reviewers championing the paper for acceptance. The lower scoring reviewers cited the empirical evaluation as a weakness of the paper, while others argued that the idea on its own was sufficiently interesting to the community.


Biologically Plausible Learning Rules for Perceptual Systems that Maximize Mutual Information

Liu, Tao

arXiv.org Artificial Intelligence

Consider a neural perceptual system exposed to an external environment. The system has an internal state that represents external events. There is strong behavioral and neural evidence (e.g., Ernst and Banks, 2002; Gabbiani and Koch, 1998) that the internal representation is intrinsically probabilistic (Knill and Pouget, 2004), in line with the statistical properties of the environment. We denote the input signal as x. The perceptual representation is then a probability distribution conditional on x, denoted p(y|x). According to the Infomax principle (Attneave, 1954; Barlow et al., 1961; Linsker, 1988), the system's goal is to maximize the mutual information (MI) between the input x and the output (neuronal response) y, which can be written as max I(x; y). (1.1)
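For discrete variables, the mutual information I(x; y) targeted by the Infomax objective can be computed directly from a joint distribution table. The following sketch (a standard textbook computation, not code from the paper) illustrates the quantity being maximized:

```python
import numpy as np

def mutual_information(p_xy):
    """I(X;Y) in bits for a joint distribution table p_xy[i, j] = P(x_i, y_j)."""
    p_x = p_xy.sum(axis=1, keepdims=True)   # marginal P(x)
    p_y = p_xy.sum(axis=0, keepdims=True)   # marginal P(y)
    mask = p_xy > 0                         # skip zero-probability cells
    return float(np.sum(p_xy[mask] * np.log2(p_xy[mask] / (p_x @ p_y)[mask])))

# A noiseless channel on two equiprobable symbols carries exactly 1 bit.
joint = np.array([[0.5, 0.0],
                  [0.0, 0.5]])
print(mutual_information(joint))  # 1.0
```

An Infomax learning rule adjusts the parameters of p(y|x) so that this quantity increases, subject to the system's noise and resource constraints.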


Are skip connections necessary for biologically plausible learning rules?

Im, Daniel Jiwoong, Patil, Rutuja, Branson, Kristin

arXiv.org Machine Learning

Backpropagation is the workhorse of deep learning; however, several other biologically-motivated learning rules have been introduced, such as random feedback alignment and difference target propagation. None of these methods has produced performance competitive with backpropagation. In this paper, we show that biologically-motivated learning rules with skip connections between intermediate layers can perform as well as backpropagation on the MNIST dataset and are robust across various sets of hyper-parameters.
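Random feedback alignment, one of the rules this abstract compares against, replaces the transposed forward weights in the backward pass with a fixed random matrix. A minimal sketch on a linear two-layer network (sizes, seed, and target map are illustrative assumptions, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 4, 8, 2
W1 = rng.normal(0, 0.5, (n_hid, n_in))
W2 = rng.normal(0, 0.5, (n_out, n_hid))
B = rng.normal(0, 0.5, (n_hid, n_out))      # fixed random feedback matrix
W_true = rng.normal(0, 1.0, (n_out, n_in))  # target linear mapping to learn

init_gap = np.linalg.norm(W2 @ W1 - W_true)
lr = 0.02
for _ in range(2000):
    x = rng.normal(0, 1.0, (n_in, 1))
    h = W1 @ x
    e = W2 @ h - W_true @ x        # output error
    W2 -= lr * e @ h.T             # exact gradient step at the output layer
    W1 -= lr * (B @ e) @ x.T       # error routed through B instead of W2.T

gap = np.linalg.norm(W2 @ W1 - W_true)  # should shrink as training proceeds
```

The key point is that the hidden-layer update never uses `W2.T`, sidestepping the weight-symmetry problem; skip connections, as the paper argues, are one way to make such rules competitive in deeper networks.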